Stereoscopic Space Map – Semi-immersive Configuration of 3D-stereoscopic Tours in Multi-display Environments
Abstract
Although large-scale stereoscopic 3D environments such as CAVEs are favorable locations for group presentations, their perspective projection and stereoscopic optimization usually follow a navigator-centric approach. These presentations are therefore often accompanied by strong side effects, such as motion sickness caused by disturbed stereoscopic vision. The reason is that the stereoscopic visualization is usually optimized for the only head-tracked person in the CAVE – the navigator – ignoring the needs of the actual target group – the audience. To overcome this misconception, this work proposes an alternative to head tracking-based optimization of the stereoscopic effect. By using an interactive virtual overview map in 3D, pre-tour and on-tour configuration of the stereoscopic effect is provided, partly utilizing our previously published interactive projection plane approach. This Stereoscopic Space Map is visualized on a zSpace 200®, whereas the virtual world is shown on a panoramic 330° CAVE2. A pilot expert study with eight participants was conducted using pre-configured tours through 3D models. The comparison of manual and automatic stereoscopic adjustment showed that the proposed approach is an appropriate alternative to the commonly used head tracking-based stereoscopic adjustment.

Introduction

In the last two decades, CAVEs (CAVE Automatic Virtual Environments) have been used for a wide range of applications. Traditional CAVEs – such as the one developed by Cruz-Neira et al. in the 1990s – consist of at least three display components, representing the front, side, top and/or bottom perspective [1]. A well-known alternative are HMDs (head-mounted displays). Having come a long way since 1968 [2], HMDs – such as the Oculus Rift – might in 2016 finally become the standard in many VR-relevant application areas, although the visual acuity and binocular resolution of CAVEs are usually significantly higher than those of HMDs [3].
The major advantages of HMDs are their mobility, easy set-up, and low acquisition costs. It can be predicted that many application areas which were formerly reserved for expensive CAVE-related display setups will soon be completely covered by HMDs. The research and development of large display environments therefore has to focus on their specific application areas.

Stereoscopic Visualization in CAVEs

A major drawback of CAVEs is that – although they are large enough to be used by multiple persons – it is only possible to optimize the Stereoscopic 3D (S3D) visualization for a single head-tracked person (in case the visualization of the virtual world should be consistent over all perspectives). A large advantage in comparison to HMDs, however, is the fact that the virtual world (the digital world the user should be immersed in), the real world (the world we are living in), as well as the people inside the CAVE are all visible. CAVEs are therefore optimal for group presentations and should also be optimized for this kind of presentation. As previously mentioned, the stereoscopic visualization in CAVEs is usually based on a tracked pair of glasses worn by the navigator – the person who navigates through the virtual environment during the presentation. The position of the tracked glasses serves as a reference point to compute the eye distance based on the distance between the glasses and the closest object in the virtual world. As long as the audience – formed by the passive attendees of a presentation – is physically located close to the navigator and the navigator is not physically moving, this approach might seem sufficient at first glance. In addition, however, the projection matrices are recomputed based on the tracked pair of glasses, so that the perspective projection depends on the point of view of the navigator.
Therefore, a) the perspective projection and b) the stereoscopic adjustment are distorted for the audience, and c) if the navigator starts to physically move, the virtual environment also seems to be in motion from the perspective of the audience. Especially in large CAVEs, the audience is usually separated from the navigator, because the navigator is usually also the presenter, communicating with the audience while navigating with a wand-like device. Summing up, the use of head tracking for group presentations in CAVEs should be avoided. An alternative method is required to optimize the stereoscopic effect and prevent strong side effects. It is known that a bad configuration of the eye distance can cause eye strain, headaches, vertigo, or even motion sickness, also known as cyber sickness [4]–[8]. Especially the last-mentioned problem occurs quite often in virtual environments such as CAVEs.

©2016 Society for Imaging Science and Technology. DOI: 10.2352/ISSN.2470-1173.2016.5.SDA-429. IS&T International Symposium on Electronic Imaging 2016, Stereoscopic Displays and Applications XXVII.

Pre-tour and on-tour stereoscopic configuration

There are various ways to tackle and analyze these visual problems, e.g., for stereoscopic motion pictures [9]–[11]. Previously, we introduced an approach to optimize the stereoscopic setting in interactive environments – the interactive projection plane S3D method [12]. This methodology is especially relevant in case virtual environments with huge differences in scale have to be explored, such as biological cell models with scale variations of up to a factor of 100,000. For example, at the mesoscopic level, a ribosome has a size of 23 nm, while the wall of a plant cell might have a size of approx. 15,000 nm. At the molecular level, a smaller molecule might have a size of around 0.1 nm.
These differences in scale usually do not have to be taken into account for regular virtual environments, as it is not required to move very close to extremely small objects. In regular virtual worlds, such as architectural models, the near clipping plane of the virtual camera usually prevents objects from moving very close to the camera. We therefore present here a new method to optimize the stereoscopic effect

• pre-tour, i.e., during the preparation of a tour without an audience, and
• on-tour, i.e., during the presentation in front of an audience.

Given a set of 3D models – in our case a number of biological cell models – these models should be presented to an audience. A related real-life scenario would be the exploration of different cell components by the navigator with the purpose of explaining their location, shape and functionality to an audience inside a CAVE. Based on the cell model, a tour has to be prepared. The tour consists of a number of tour points and their sequential order. Each tour point contains information about a) the camera position, b) the camera orientation, and c) the stereoscopic eye distance at this position. During the configuration of the tour, no audience is present and the navigator can prepare the tour. We call this the pre-tour situation. During the on-tour situation, the navigator presents the preconfigured tour to an audience. Moreover, it might occur that during the presentation the audience asks the navigator to move to a specific position in the virtual world which is not part of the preconfigured tour. In this case, the stereoscopic eye distance has to be computed based on the new position in the virtual world. Therefore, both potential tour-related situations have to be covered by our approach.

Stereoscopic configuration of CAVEs with a semi-immersive hybrid-dimensional monitor

Nowadays, many different CAVE configurations exist, e.g., [1], [13], [14].
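The tour structure described above – an ordered sequence of tour points, each storing camera position, camera orientation, and eye distance – can be sketched as a simple data structure. This is a minimal illustration; the class and field names are our own, not identifiers from the CmCX code base:

```python
from dataclasses import dataclass, field

@dataclass
class TourPoint:
    """One stop of a pre-configured tour through the virtual model."""
    position: tuple       # a) (x, y, z) camera position in world coordinates
    orientation: tuple    # b) camera orientation, e.g. as (yaw, pitch, roll)
    eye_distance: float   # c) stereoscopic eye separation at this position

@dataclass
class Tour:
    """An ordered sequence of tour points."""
    points: list = field(default_factory=list)

    def add_point(self, position, orientation, eye_distance):
        self.points.append(TourPoint(position, orientation, eye_distance))

# pre-tour: the navigator records tour points while exploring the map
tour = Tour()
tour.add_point((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), 0.065)
tour.add_point((1.2, 0.3, 2.0), (15.0, 0.0, 0.0), 0.030)
```

During the on-tour situation, playing back the tour amounts to iterating over `tour.points` in order; an unplanned detour simply appends or interpolates a new point with a freshly computed eye distance.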
Here, we use the CAVE2, a circular display environment providing a 330° panorama view [3], [13], [15]. With a diameter of 7.40 m, it accommodates groups of up to 20 people, such as school classes, university seminars or company delegations. In its center, it provides space to communicate and to discuss the visualized environment from different perspectives, providing a direct combination of real and virtual world. To configure CAVE tours, a device is required which is able to a) show the stereoscopic virtual map of the model visualized in the CAVE, including a visual representation of the tour points, to enable precise orientation and navigation inside the virtual world, b) provide a 2D visualization to show, e.g., the sequential order of the tour and a GUI to change the settings of the tour and the stereo effect, and c) directly test the stereoscopic configuration. For this purpose, a zSpace 200®, a semi-immersive 3D monitor, is used. We developed a software for the zSpace which shows an overview map of the virtual world shown in the CAVE2 (following the worlds-in-miniature – WIM – metaphor [16]). We call this approach the Stereoscopic Space Map (or, short, in the following sections: Space Map). Different navigation methods were implemented to change the view in the virtual world by using the Space Map, partly based on our previous work [17]. A huge advantage of this approach is the fact that the tour and its stereoscopic optimization can be prepared using the zSpace before entering the CAVE2, because the use of this large-scale virtual environment is quite cost-intensive.

Methods

Since their invention, CAVEs have been used for different purposes, but very frequently they are used for group presentations. Usually, only a single person or a very small group guides the tour through the virtual environment. Often, this is done by using wand-like devices.
Alternatively, visual navigation interfaces providing access to additional information, e.g., tablet computers, have been used for more than a decade in CAVEs [18]. But standard tablets are not able to stereoscopically visualize and interact with spatial structures in 3D space.

Stereoscopic Space Map

Figure 1. Stereoscopic Space Map – Hardware: This illustration shows the 3D monitors (here: semi-transparent) of the CAVE2. Four of them are each connected to a single node (n). The nodes are connected to a head node server (s). The zSpace system (z) communicates with the server.

Figure 2. Stereoscopic Space Map – Hardware: Right bottom: the zSpace 200® showing the map of the cell; in the background: the CAVE2 environment, showing the nuclear region of the cell (red/cyan anaglyph stereo image)

To enable pre-tour and on-tour stereoscopic effect configuration, we developed the Stereoscopic Space Map. On the one hand, this map provides the navigator with a preview of the stereoscopic setting; on the other hand, it tries to calculate the best eye distance based on the distance between the actual camera position and the closest object (see section Stereoscopic 3D Methods). During presentations in CAVE2, the navigator as well as the audience usually lack an overview of a complex environment. Of course, an overview like a map can be shown directly on the displays of the CAVE2 [19]. Especially CAVE2 is often used for hybrid-dimensional visualization and interaction, combining 2D and 3D approaches [17]. But this hybrid reality visualization will disturb the immersion as well as the 3D-stereoscopic effect in case the CAVE2 should just represent the panorama window to the virtual world – and here, a 3D model should be explored (see also section Stereoscopic 3D Methods).
Hardware

The hardware used consists of a zSpace 200® [15], representing the navigator's display, whereas the virtual world is displayed by CAVE2 [15]. CAVE2 is a circular display environment consisting of 20 four-panel columns (with 46" 3D LCDs) providing a 330° panorama view [13], [15] (Figure 1). With a diameter of 7.40 m and a height of 2.70 m, it is appropriate for audiences of approx. 20 members. Because the current navigation device, the wand (in our case a Sony PlayStation 3® controller connected to the CAVE2), should be replaced by the Space Map, and a fluent and precise navigation should be enabled, high-resolution 3D interaction is required. The zSpace 200 is a passive Full HD 23" 3D monitor equipped with an infrared-light-based head-tracking system and a zStylus pen with three buttons and vibration capability for 3D interaction [20]. Another positive aspect is the fact that the zSpace is a quite mobile and compact device – it takes only approx. 10 minutes to place the zSpace system in the center of the CAVE2. Both the CAVE2 and the zSpace use circularly polarized glasses for creating the 3D effect; therefore, the parallel use of both technologies is possible. The zSpace system used in the context of this work consists of the zSpace 200 monitor, an optional standard 2D monitor, plus a connected computer. Figure 1 shows how the zSpace system is connected to the CAVE2. Each monitor column is connected to one out of 20 node computers (n), which are synchronized via a head node server (s). The zSpace system (z) communicates unidirectionally with the head node server. Figure 2 shows a photo of the zSpace/CAVE2 setup.

Software

The two displays are used in conjunction with two different software packages.
The navigator software used for the Space Map is an extended version of the CELLmicrocosmos 1.2 CellExplorer (CmCX, available at http://Cm1.CELLmicrocosmos.org) [21], [22], whereas the high-quality rendering engine supporting large-scale visualization is Omegalib [19]. CmCX is a software which is used for educational as well as scientific cell exploration and visualization [23]. In its context, a number of different cell models were generated – some of these models will be used for the experiments in the following sections. The navigation actions are transferred from CmCX to Omegalib using a TCP/IP connection. By using CmCX as a navigation interface, the navigator is able to use a 3D overview map, including associated background information shown on a separate 2D monitor, plus an overview of the whole tour (Figure 3.1). In this way, interactivity is maintained, while the camera in the virtual world moves in a movie-like fashion between different points of interest. This approach is similar to the Worlds in Miniature (WIM) approach [16]. Here, the idea is to present a smaller, simplified model of the virtual world to improve navigation and interaction with the virtual environment. The initial approach integrated the WIM into the virtual environment – it was floating in front of the user while he was fully immersed in the virtual world. An advantage of our approach is the fact that the WIM does not block the view between the virtual world and the navigator, because it is placed in the center of the CAVE2.
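The unidirectional transfer of navigation actions over TCP/IP can be illustrated with a small sketch. The actual CmCX/Omegalib wire format is not published here, so the newline-delimited JSON layout and the function names below are purely illustrative assumptions:

```python
import json
import socket

def encode_nav_update(position, orientation, eye_distance):
    """Serialize one navigation action as a newline-delimited JSON message.
    The message layout is a hypothetical example, not the real protocol."""
    msg = {"cmd": "set_camera",
           "pos": list(position),
           "rot": list(orientation),
           "eye_dist": eye_distance}
    return (json.dumps(msg) + "\n").encode("utf-8")

def send_nav_update(sock, position, orientation, eye_distance):
    """Push the update unidirectionally towards the head node server."""
    sock.sendall(encode_nav_update(position, orientation, eye_distance))

# demonstration with a local socket pair instead of the real head node
client, server = socket.socketpair()
send_nav_update(client, (1.0, 0.5, 2.0), (0.0, 90.0, 0.0), 0.04)
received = json.loads(server.recv(4096).decode("utf-8"))
```

Because the communication is one-way, the navigator side never blocks on a reply; the head node server simply applies each camera update to the synchronized CAVE2 nodes.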
Figure 3. Stereoscopic Space Map – Software: 3.1: the plant cell as shown in the Space Map in Full Screen mode; 3.2: the tour shown in the Stereoscopic Space Map: left top: the cell shown in top/bottom stereo mode; left bottom: the slider for the stereoscopic eye distance in the Space Map SS3D_CmCX, and below it the slider for the stereoscopic eye distance in Omegalib SS3D_OL; right: the tour and its sequential tour points

The Space Map realized with the zSpace and CmCX provides different interaction techniques supporting six degrees of freedom (6DOF). Navigation using the zStylus pen was first introduced in our previous work discussing a hybrid-dimensional visualization and interaction approach using the zSpace [17]. In Floating Mode it is possible to highlight each component of a model and – in case nothing was selected – to freely move through the virtual map. By selecting an object and pressing the center button of the zStylus pen, the user changes from Floating Mode to Object-Bound Mode. In this mode, the user can rotate around the selected component by vertically and horizontally moving the zStylus pen, and the distance to the component can be changed by moving the zStylus pen forward or backward, respectively. The position and orientation of the zStylus pen are mapped to a 3D pointer in the virtual map, which can also be used to change the perspective in the virtual world shown in the CAVE2. For this purpose, the navigator can use the left zStylus pen button to transfer the actual view of the zSpace to the CAVE2. Alternatively, clicking the right zStylus button will transfer the orientation and position of the 3D pointer to the CAVE's camera. The direction of the pointer reflects the direction of the camera. In this way, 6DOF camera positioning is possible.
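The switching between Floating Mode and Object-Bound Mode described above can be summarized as a small state-machine sketch. The class, method and mode names are our own illustrative choices, not identifiers from the actual implementation:

```python
from enum import Enum, auto

class Mode(Enum):
    FLOATING = auto()      # free movement through the virtual map
    OBJECT_BOUND = auto()  # orbiting a selected component

class SpaceMapInteraction:
    """Hypothetical sketch of the two zStylus interaction modes."""

    def __init__(self):
        self.mode = Mode.FLOATING
        self.selected = None

    def press_center_button(self, highlighted_object):
        # center button: with a highlighted component, enter Object-Bound
        # Mode; pressing again releases the object and returns to Floating
        if self.mode is Mode.FLOATING and highlighted_object is not None:
            self.selected = highlighted_object
            self.mode = Mode.OBJECT_BOUND
        elif self.mode is Mode.OBJECT_BOUND:
            self.selected = None
            self.mode = Mode.FLOATING

    def press_left_button(self, zspace_view):
        # left button: transfer the current zSpace view to the CAVE2 camera
        return {"cave_camera": zspace_view}

ix = SpaceMapInteraction()
ix.press_center_button("ribosome")  # highlight a component and orbit it
```

In Object-Bound Mode, the pen's vertical/horizontal motion would then be mapped to rotation around `ix.selected`, and forward/backward motion to the distance towards it.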
Stereoscopic 3D Methods

As previously mentioned, it is problematic to use CAVE2 with hybrid-dimensional visualization in case virtual worlds should be presented to an audience. The combination of a 2D interface with a 3D virtual world is problematic because of potential pop-out effects associated with the virtual world: the 2D visualization area would usually be located in the projection plane (Figure 4.1, Aprojection), but 3D objects in the virtual world would often be located between the projection plane and the near plane (Figure 4.1, Anear), exceeding the 2D layer. Moreover, it would also be problematic to show the S3D overview map in front of the virtual world, because the stereoscopic visualization would have to be adjusted for a) the virtual world in the background as well as b) the virtual map in the foreground. In this case, the virtual world would have to be restricted to the space between a) Aprojection and Afar, and the virtual map would have to be visualized between b) Aprojection and Anear (Figure 4.1). Pop-out effects for the virtual world would thus have to be heavily restricted or completely omitted – excessively limiting the representation of the virtual world. 3D-stereoscopic navigation requires special precautions, especially if large differences in scale have to be bridged. In our previous publication we introduced two different methods to optimize the stereoscopic vision. Here, our previously discussed dynamic interactive projection plane method is used, which is especially relevant in case large virtual cells are visualized, bridging the molecular and mesoscopic scale with differences of up to a factor of 100,000 [12]. Figure 4.2 illustrates the static interactive projection plane method. Here, the user navigates around the center of a single object.
In our zSpace-based approach, this is the case if the user moves around the center of the selected object by keeping the center zStylus button pressed while moving the zStylus pen. The eye distance decreases the closer the navigator moves towards the object. Figure 4.3 depicts the dynamic interactive projection plane method. Because in this situation many objects of different size and location exist in the virtual world, the eye distance has to be adjusted based on the object closest to the user. Here, too, the eye distance decreases the closer the navigator moves towards the object, but the reference object is always the one in the center of the view.

Figure 4. Automatic computation of the stereoscopic eye distance: 4.1: correlation between the eyes' distance and their distance to the picked point. de: distance of eyes, dp: distance between the eyes' center and Pp, Pp: picked point, Aprojection: projection plane, Anear: near plane, Afar: far plane; 4.2: static interactive projection plane S3D method; 4.3: dynamic interactive projection plane S3D method

The stereoscopic eye distance in the Space Map, E_SS3D_CmCX, is defined as a function of dp, the distance between the eyes' center and the picked point Pp (Figure 4.1), so that de shrinks as dp shrinks.
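The dynamic adjustment described above can be sketched as follows. This is a minimal illustration under our own assumptions: the linear mapping and the constants `scale`, `d_min` and `d_max` are illustrative values, not taken from [12]:

```python
def eye_distance(d_closest, scale=0.01, d_min=0.001, d_max=0.065):
    """Sketch of the dynamic interactive projection plane idea: the eye
    separation shrinks linearly with the distance d_closest between the
    camera and the nearest object in the center of the view, clamped to a
    comfortable range. All constants are illustrative assumptions."""
    return max(d_min, min(d_max, scale * d_closest))

# moving closer to an object reduces the stereoscopic eye separation
far_setting = eye_distance(10.0)   # large distance: clamped to d_max
near_setting = eye_distance(0.5)   # close approach: small separation
```

The clamping keeps the separation within a usable range even when bridging the molecular and mesoscopic scales, where the distance to the nearest object may vary by a factor of 100,000.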